25th International Conference on Computer and Information Technology, ICCIT 2022 ; : 745-750, 2022.
Article in English | Scopus | ID: covidwho-2277457

ABSTRACT

The COVID-19 pandemic has compelled people to adopt a virtual lifestyle. Videoconferencing is now the prevalent way to conduct business meetings owing to the numerous benefits it offers. However, many people with speech impairments find themselves disadvantaged in this new normal, as they cannot communicate their ideas effectively, especially in fast-paced meetings. This paper therefore introduces an enriched dataset, built with an action-recognition method, of the most common phrases used in professional meetings translated into American Sign Language (ASL). It further proposes a sign-language detection and classification model employing deep learning architectures, namely CNN and LSTM. The performance of these models is analysed using metrics such as accuracy, precision, recall, and F1-score. After being trained on the dataset introduced in this study, the CNN and LSTM models yield accuracies of 93.75% and 96.54%, respectively. Incorporating the LSTM model into cloud services, virtual private networks, and software will therefore allow people with speech impairments to use sign language, which will automatically be translated into captions in real time, even with a moving camera. This will in turn equip other participants to understand the message being conveyed and to discuss and act on the ideas with ease. © 2022 IEEE.
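The abstract evaluates its models with accuracy, precision, recall, and F1-score. As a brief illustrative sketch (not the paper's code), these metrics can be computed from predicted versus true labels as follows; the phrase labels and predictions below are invented for demonstration only.

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy over all labels, plus precision/recall/F1 for one class of interest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical classifier output for two ASL meeting phrases
y_true = ["hello", "hello", "thanks", "hello", "thanks"]
y_pred = ["hello", "thanks", "thanks", "hello", "hello"]
print(classification_metrics(y_true, y_pred, "hello"))  # → (0.6, 0.666..., 0.666..., 0.666...)
```

In a multi-class setting like phrase recognition, the per-class scores would typically be averaged (macro or weighted) across all phrase classes.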
